

Value Imprint: A Technique for Auditing the Human Values Embedded in RLHF Datasets

Neural Information Processing Systems

LLMs are increasingly fine-tuned using RLHF datasets to align them with human preferences and values. However, little research has investigated which specific human values are operationalized through these datasets. In this paper, we introduce Value Imprint, a framework for auditing and classifying the human values embedded within RLHF datasets. To investigate the viability of this framework, we conducted three case-study experiments, auditing the Anthropic/hh-rlhf, OpenAI WebGPT Comparisons, and Alpaca GPT-4-LLM datasets to examine the human values embedded within them. Our analysis involved a two-phase process.


On Enhancing Structural Resilience of Multirobot Coverage Control with Bearing Rigidity

arXiv.org Artificial Intelligence

Kartik A. Pant, Vishnu Vijay, Minhyun Cho, and Inseok Hwang

Abstract -- The problem of multi-robot coverage control has been widely studied to efficiently coordinate a team of robots to cover a desired area of interest. However, this problem faces significant challenges when some robots are lost or deviate from their desired formation during the mission due to faults or cyberattacks. Since a majority of multi-robot systems (MRSs) rely on communication and relative sensing for their efficient operation, a failure in one robot could result in a cascade of failures in the entire system. In this work, we propose a hierarchical framework for area coverage, combining centralized coordination via Voronoi partitioning with decentralized reference-tracking model predictive control (MPC) for control design. In addition to reference tracking, the decentralized MPC also performs bearing maintenance to enforce a rigid MRS network, thereby enhancing structural resilience, i.e., the ability to detect and mitigate the effects of localization errors and robot loss during the mission. Furthermore, we show that the resulting control architecture guarantees recovery of the MRS network in the event of robot loss while maintaining a minimally rigid structure. The effectiveness of the proposed algorithm is validated through numerical simulations.

I. INTRODUCTION

Recent advances in multi-robot systems (MRSs), with their superior sensing, communication, and computational capabilities, allow them to perform complicated tasks otherwise impossible for single-robot systems. MRSs have been widely adopted for numerous applications such as cooperative sensor coverage [1], search and rescue [2], and environmental monitoring [3]. During the recent catastrophic wildfires in Los Angeles, drone swarms were actively utilized for monitoring and prevention [4].
However, as the complexity of these systems increases, the number of failure modes affecting MRS performance and safety also increases. Furthermore, the sensing [5], [6] and communication networks [7] open up new cyberattack surfaces, network vulnerabilities, and backdoors, which adversaries can exploit to degrade and disrupt the performance of the MRS. Thus, designing control architectures that ensure the system's resilience under these unknown failure modes becomes essential. A key application of MRSs is to cover a desired area of interest, often described by a density function.

The authors are with the School of Aeronautics and Astronautics, Purdue University, West Lafayette, IN 47906.
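The centralized coordination step the abstract describes is based on Voronoi partitioning of the coverage area. A minimal sketch of that idea, not the paper's implementation, is the classic Lloyd-style update: assign each point of a discretized area to its nearest robot (its Voronoi cell) and move each robot toward the density-weighted centroid of its cell. The grid resolution, density function, and robot positions below are hypothetical illustrations.

```python
import numpy as np

def coverage_step(robots, density, xs, ys):
    """One Lloyd iteration: move each robot toward the density-weighted
    centroid of its Voronoi cell over a discretized area."""
    X, Y = np.meshgrid(xs, ys)
    pts = np.stack([X.ravel(), Y.ravel()], axis=1)     # grid points
    w = density(pts)                                   # importance weights
    # Assign every grid point to its nearest robot (Voronoi partition).
    d2 = ((pts[:, None, :] - robots[None, :, :]) ** 2).sum(-1)
    owner = d2.argmin(axis=1)
    new = robots.copy()
    for i in range(len(robots)):
        m = owner == i
        if w[m].sum() > 0:                             # weighted centroid of cell i
            new[i] = (pts[m] * w[m, None]).sum(0) / w[m].sum()
    return new

# Hypothetical setup: unit square, density peaked at (0.7, 0.7).
xs = ys = np.linspace(0.0, 1.0, 50)
rho = lambda p: np.exp(-10.0 * ((p - 0.7) ** 2).sum(axis=1))
robots = np.array([[0.2, 0.2], [0.8, 0.3], [0.5, 0.8]])
for _ in range(20):
    robots = coverage_step(robots, rho, xs, ys)
```

Iterating this update drives the robots toward a configuration concentrated around the high-density region; the paper layers decentralized MPC and bearing maintenance on top of such a coordination scheme.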


An Efficient Detection and Control System for Underwater Docking using Machine Learning and Realistic Simulation: A Comprehensive Approach

arXiv.org Artificial Intelligence

Underwater docking is critical to enable the persistent operation of Autonomous Underwater Vehicles (AUVs). For this, the AUV must be capable of detecting and localizing the docking station, which is complex due to the highly dynamic undersea environment. Image-based solutions offer a high acquisition rate and a versatile way to adapt to this environment; however, the underwater environment presents challenges such as low visibility, high turbidity, and distortion. In addition, field experiments to validate underwater docking capabilities can be costly and dangerous due to the specialized equipment and safety considerations required. This work compares different deep-learning architectures for underwater docking detection and classification. The best-performing architecture is then compressed using knowledge distillation under the teacher-student paradigm to reduce the network's memory footprint, allowing real-time implementation. To reduce the simulation-to-reality gap, a Generative Adversarial Network (GAN) performs image-to-image translation, converting Gazebo simulation images into realistic underwater-looking images. The obtained image is then processed using an underwater image formation model to simulate image attenuation over distance under different water types. The proposed method is finally evaluated according to the AUV docking success rate and compared with classical vision methods. The simulation results show a 20% improvement in high-turbidity scenarios regardless of the underwater currents. Furthermore, we demonstrate the performance of the proposed approach with experimental results on the off-the-shelf Iver3 AUV.
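The teacher-student distillation mentioned above typically trains the small student on a blend of soft targets (the teacher's temperature-softened output distribution) and hard ground-truth labels. The following NumPy sketch of that standard loss is illustrative only, with hypothetical logits and hyperparameters; the paper's exact formulation and training setup may differ.

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)              # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of a soft KL term (teacher -> student) and a
    hard cross-entropy term with the ground-truth labels."""
    p_t = softmax(teacher_logits, T)
    p_s = softmax(student_logits, T)
    # KL(p_t || p_s), scaled by T^2 as is conventional for distillation.
    soft = (p_t * (np.log(p_t + 1e-12) - np.log(p_s + 1e-12))).sum(-1).mean() * T * T
    p = softmax(student_logits)                        # hard cross-entropy at T=1
    hard = -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()
    return alpha * soft + (1 - alpha) * hard

# Hypothetical 3-class example: student matching the teacher zeroes the soft term.
teacher = np.array([[2.0, 0.5, -1.0]])
labels = np.array([0])
loss = distillation_loss(teacher, teacher, labels)
```

When the student's logits equal the teacher's, the KL term vanishes and only the hard cross-entropy remains, which is a useful sanity check when wiring up such a loss.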


Training computers to tease out subtext behind text

#artificialintelligence

WEST LAFAYETTE, Ind. – It is hard enough for humans to interpret the deeper meaning and context of social media and news articles. Asking computers to do it is a nearly impossible task. Even C-3PO, fluent in over 6 million forms of communication, misses the subtext much of the time. Natural language processing, the subfield of artificial intelligence connecting computers with human languages, uses statistical methods to analyze language, often without incorporating the real-world context needed for understanding the shifts and currents of human society. To do that, you have to translate online communication, and the context from which it emerges, into something the computers can parse and reason over.


Taking lessons from a sea slug, study points to better hardware for artificial intelligence

#artificialintelligence

Researchers mimic the animal kingdom's most basic signs of intelligence in quantum material WEST LAFAYETTE, Ind. -- For artificial intelligence to get any smarter, it needs first to be as intelligent as one of the simplest creatures in the animal kingdom: the sea slug. A new study has found that a material can mimic the sea slug's most essential intelligence features. The discovery is a step toward building hardware that could help make AI more efficient and reliable for technology ranging from self-driving cars and surgical robots to social media algorithms. The study, publishing this week in the Proceedings of the National Academy of Sciences, was conducted by a team of researchers from Purdue University, Rutgers University, the University of Georgia and Argonne National Laboratory. "Through studying sea slugs, neuroscientists discovered the hallmarks of intelligence that are fundamental to any organism's survival," said Shriram Ramanathan, a Purdue professor of materials engineering.


Simulation Studies on Deep Reinforcement Learning for Building Control with Human Interaction

arXiv.org Artificial Intelligence

The building sector is the largest energy consumer in the world, and there has been considerable research interest in the energy consumption and comfort management of buildings. Inspired by recent advances in reinforcement learning (RL), this paper aims at assessing the potential of RL in building climate control problems with occupant interaction. We apply a recent RL approach, called DDPG (deep deterministic policy gradient), to the continuous building control tasks and assess its performance with simulation studies in terms of its ability to handle (a) the partial state observability due to sensor limitations; (b) a complex stochastic system with high-dimensional state spaces, which are jointly continuous and discrete; (c) uncertainties due to ambient weather conditions, occupants' behavior, and comfort feelings. In particular, the partial observability and uncertainty due to the occupant interaction significantly complicate the control problem. Through simulation studies, the policy learned by DDPG demonstrates reasonable performance and computational tractability.
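Two pieces of DDPG bookkeeping are distinctive for continuous control like this: the critic's bootstrapped TD target computed with target networks, y = r + γ Q'(s', μ'(s')), and the slow Polyak averaging of those target networks. A minimal sketch with hypothetical linear actor/critic parameters (not the paper's networks or building model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical linear actor mu(s) = W s and critic Q(s, a) = v . [s; a],
# standing in for the neural networks used in DDPG.
actor = {"W": rng.normal(size=(1, 3))}
critic = {"v": rng.normal(size=4)}
actor_t = {k: p.copy() for k, p in actor.items()}      # target networks start
critic_t = {k: p.copy() for k, p in critic.items()}    # as copies of the online nets

def mu(params, s):
    return params["W"] @ s                             # deterministic action

def Q(params, s, a):
    return params["v"] @ np.concatenate([s, a])        # state-action value

def td_target(r, s_next, gamma=0.99):
    # y = r + gamma * Q'(s', mu'(s')): bootstrap with the *target* nets
    # so the regression target moves slowly during training.
    return r + gamma * Q(critic_t, s_next, mu(actor_t, s_next))

def soft_update(target, online, tau=0.005):
    # Polyak averaging: theta' <- tau * theta + (1 - tau) * theta'.
    for k in target:
        target[k] = tau * online[k] + (1 - tau) * target[k]
```

In a full training loop, each gradient step on the online actor and critic would be followed by `soft_update` on both target networks; the slow targets are what keep the bootstrapped regression stable.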


Artificial intelligence collaboration seeking to hasten COVID-19 insights

#artificialintelligence

WEST LAFAYETTE, Ind. - During the COVID-19 pandemic, health care professionals and researchers have been confined mostly to using local and national datasets to study the impact of comorbidities, pre-existing medication use, demographics and various interventions on disease course. Now, Purdue University is joining with other organizations for an initiative to accelerate global collaborative research on COVID-19 through access to high-quality, real-time multi-center patient datasets. The National Science Foundation has provided funding to develop the Records Evaluation for COVID-19 Emergency Research (RECovER) initiative. Researchers are testing predictions of artificial intelligence drug discovery platforms from the lab of Gaurav Chopra, an assistant professor of analytical and physical chemistry in Purdue's College of Science, on patient datasets across a network of health care institutions.



Artificial Intelligence: Has the Bible warned us against the rise of AI? – IAM Network

#artificialintelligence

Artificial intelligence (AI) is often at the heart of big blockbusters, as evidenced by the popularity of The Matrix and Terminator franchises. The Hollywood films typically envision scenarios in which sentient machines rise against their human masters. And though scientists have yet to develop machines that can truly think for themselves, many fear science fiction could one day become science reality. Paul Begley, a Christian evangelist from West Lafayette in Indiana, US, believes fears of artificial intelligence can be addressed by reading the Bible. Pastor Begley is the host of The Coming Apocalypse, a programme linking modern-day events to biblical scripture that is broadcast on some US TV channels. During his latest broadcast, the preacher bizarrely claimed AI technology is linked to biblical prophecies of the Second Coming of Jesus Christ. He said: "Today we're going to be looking at AI technology; how it's becoming part of the biblical narrative; how that in the last days AI technology will be used to judge and convict, and maybe even execute the human race."


Artificial intelligence is energy-hungry. New hardware could curb its appetite.

#artificialintelligence

WEST LAFAYETTE, Ind. -- Just to solve a puzzle or play a game, artificial intelligence can require software running on thousands of computers. That can consume as much energy as three nuclear plants produce in one hour. A team of engineers has created hardware that can learn skills using a type of AI that currently runs on software platforms. Sharing intelligence features between hardware and software would offset the energy needed to use AI in more advanced applications such as self-driving cars or drug discovery. "Software is taking on most of the challenges in AI. If you could incorporate intelligence into the circuit components in addition to what is happening in software, you could do things that simply cannot be done today," said Shriram Ramanathan, a professor of materials engineering at Purdue University.